The 'death of creativity'? AI job fears stalk advertising industry
From motion-capture technology that lets the Indian cricketing star Rahul Dravid give personalised coaching tips to children, to an algorithm trained on Shakespeare's handwriting powering a robotic arm that rewrites Romeo and Juliet, artificial intelligence is rapidly revolutionising the global advertising industry. Those AI-created adverts, for the Cadbury drinks brand Bournvita and the pen maker Bic, were produced by the agency group WPP, which is spending £300m annually on data, tech and machine learning to remain competitive. Mark Read, chief executive of the London-listed marketing services group, has said AI is "fundamental" to the future of its business, while admitting it will drastically reshape the ad industry workforce. Now Read has announced he will leave at the end of this year, after almost seven years as chief executive and more than 30 at WPP, as the company struggles to keep pace with its peers and to counter moves by big tech to muscle in on the AI-driven future of advertising. For ad agencies, the upheaval originates from a familiar source.
Massive AI energy demand is bringing Three Mile Island back from the dead
Power-hungry generative AI models are quickly making Big Tech's sizable energy requirements even more demanding, forcing companies to seek out power from unlikely places. While Meta and Google are exploring modern geothermal technology and other experimental energy sources, Microsoft is stepping back in time. This week, the company signed a 20-year deal to source energy from the storied Three Mile Island nuclear facility in Pennsylvania, a site once known for the worst reactor accident in US history. If successful, the effort would breathe life back into an iconic symbol of US nuclear power and potentially provide Microsoft with around 800 megawatts of carbon-free energy to help satiate its growing appetite. "This agreement is a major milestone in Microsoft's efforts to help decarbonize the grid in support of our commitment to become carbon negative," Microsoft's VP of energy, Bobby Hollis, said in a statement.
Nvidia takes on Meta and Google in the speech AI technology race
At Nvidia's Speech AI Summit today, the company discussed its new speech artificial intelligence (AI) ecosystem, developed through a partnership with Mozilla Common Voice. The ecosystem focuses on building crowdsourced multilingual speech corpora and open-source pretrained models. Nvidia and Mozilla Common Voice aim to accelerate the growth of automatic speech recognition models that work universally for speakers of every language worldwide. Nvidia found that standard voice assistants, such as Amazon Alexa and Google Home, support fewer than 1% of the world's spoken languages.
'Chat' with Musk or Trump on AI chatbot
A new chatbot start-up from two top artificial intelligence talents lets anyone strike up a conversation with impersonations of Donald Trump, Elon Musk, Albert Einstein and Sherlock Holmes. Registered users type in messages and get responses; they can also create a chatbot of their own on Character.ai. "There were reports of possible voter fraud and I wanted an investigation," the Trump bot said. The start-up's two founders helped create Google's artificial intelligence project LaMDA, which Google keeps closely guarded while it develops safeguards against social risks.
Amazon, Apple, Microsoft, Meta and Google to improve speech recognition for people with disabilities
The University of Illinois Urbana-Champaign (UIUC) has partnered with Amazon, Apple, Google, Meta, Microsoft and nonprofits on the Speech Accessibility Project. The aim is to improve voice recognition for communities with disabilities and diverse speech patterns often not considered by AI algorithms. That includes people with Lou Gehrig's disease (ALS), Parkinson's, cerebral palsy, Down syndrome and other conditions that affect speech. "Speech interfaces should be available to everybody, and that includes people with disabilities," UIUC professor Mark Hasegawa-Johnson said. "This task has been difficult because it requires a lot of infrastructure, ideally the kind that can be supported by leading technology companies, so we've created a uniquely interdisciplinary team with expertise in linguistics, speech, AI, security and privacy."